Exploiting Task Parallelism with OpenCL: A Case Study
Authors
Abstract
Similar Resources
A Coordination Layer for Exploiting Task Parallelism with HPF
This paper introduces COLTHPF, a run-time support for exploiting task parallelism within HPF programs, which can be employed by a compiler of a high-level coordination language to structure a set of data-parallel HPF tasks according to popular paradigms of task parallelism. We use COLTHPF to program a computer vision application and report the results obtained by running the application on an S...
Exploiting Multiple Levels of Parallelism in OpenMP: A Case Study
Most current shared-memory parallel programming environments are based on thread packages that allow the exploitation of a single level of parallelism. These thread packages do not enable the spawning of new parallelism from a previously activated parallel region. Current initiatives (like OpenMP) include in their definition the exploitation of multiple levels of parallelism through the nesting ...
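The nesting idea described above — spawning a new parallel region from inside an already-parallel one — can be illustrated with a minimal sketch. This is not OpenMP code; it is a hypothetical Python analogue using `concurrent.futures`, where an outer pool of tasks each opens its own inner pool:

```python
from concurrent.futures import ThreadPoolExecutor

def inner_work(i, j):
    # Second level of parallelism: work item j spawned inside outer task i.
    return i * 10 + j

def outer_task(i):
    # We are already inside a parallel region; open a nested one.
    with ThreadPoolExecutor(max_workers=2) as inner:
        return sum(inner.map(lambda j: inner_work(i, j), range(3)))

with ThreadPoolExecutor(max_workers=4) as outer:
    # First level of parallelism: four outer tasks, each nesting more workers.
    results = list(outer.map(outer_task, range(4)))
print(results)  # [3, 33, 63, 93]
```

In OpenMP terms, the outer `with` block corresponds to a `parallel` region and each `outer_task` to a thread that opens a further nested `parallel` region, which is exactly the capability the abstract says single-level thread packages lack.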
Exploiting Task-Level Parallelism Using pTask
This paper presents pTask — a system that allows users to automatically exploit dynamic task-level parallelism in sequential array-based C programs. The system employs compiler analysis to extract data-usage information from the program, then uses this information at run time to dynamically exploit concurrency and to enforce data dependences. Experimental results using a prototype of the system ...
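The run-time scheme the abstract describes — running tasks concurrently while enforcing data dependences discovered dynamically — can be sketched in a few lines. This is a hypothetical illustration in Python, not pTask's actual implementation: `DependenceScheduler` and its `submit` interface are invented names, and only flow dependences (reader waits on the last writer) are modelled:

```python
from concurrent.futures import ThreadPoolExecutor

class DependenceScheduler:
    """Hypothetical sketch: execute tasks concurrently while honouring
    flow dependences tracked at run time, in the spirit of pTask."""

    def __init__(self, workers=4):
        self.pool = ThreadPoolExecutor(max_workers=workers)
        self.last_writer = {}  # datum name -> Future of its most recent writer

    def submit(self, fn, reads, writes):
        # Capture, at submission time, the producers this task depends on.
        deps = [self.last_writer[d] for d in reads | writes
                if d in self.last_writer]

        def run():
            for f in deps:      # block until every producer has finished
                f.result()
            return fn()

        fut = self.pool.submit(run)
        for d in writes:        # register this task as the new last writer
            self.last_writer[d] = fut
        return fut

state = {}
sched = DependenceScheduler()
sched.submit(lambda: state.update(x=1), reads=set(), writes={"x"})
result = sched.submit(lambda: state["x"] + 1, reads={"x"}, writes=set())
print(result.result())  # 2 — the reader waited for the writer of "x"
```

A real system would also have to handle anti- and output dependences and derive the read/write sets from compiler analysis rather than explicit annotations, as the abstract indicates.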
Leveraging Parallelism with CUDA and OpenCL
Graphics processing units (GPUs), originally designed for computing and manipulating pixels, have become general-purpose processors capable of executing in excess of a trillion calculations per second. Taking advantage of GPUs' compute power and commodity popularity, the field of computing systems is exhibiting a trend toward heterogeneous platforms consisting of a central processor integrated wi...
Exploiting Task and Data Parallelism on a Multicomputer
For many applications, achieving good performance on a private-memory parallel computer requires exploiting data parallelism as well as task parallelism. Depending on the size of the input data set and the number of nodes (i.e., processors), different tradeoffs between task and data parallelism are appropriate for a parallel system. Most existing compilers focus on only one of data parallelism an...
Journal
Journal title: Journal of Signal Processing Systems
Year: 2018
ISSN: 1939-8018,1939-8115
DOI: 10.1007/s11265-018-1416-1